1.
This paper presents the design of a hardware-efficient, reconfigurable pseudorandom number generator (PRNG) that uses two different feedback-controller-based four-dimensional (4D) hyperchaotic systems (Hyperchaotic-1 and Hyperchaotic-2) to provide confidentiality for digital images. The parameters of the two hyperchaotic systems are fixed at specific values so that all but a few multiplications can be performed with hardwired shift operations rather than binary multipliers, consuming virtually no hardware resources. The ordinary differential equations (ODEs) of the two systems are exploited to build a single generic architecture, which allows switching between the two 4D hyperchaotic systems depending on the required behavior. To strengthen security, the generator can also be used in an encryption process that encrypts the input data up to two times in succession, each time with a different PRNG configuration. The proposed reconfigurable PRNG was designed in Verilog HDL, synthesized with the Xilinx tools for the Virtex-5 (XC5VLX50T) and Zynq (XC7Z045) FPGAs, and analyzed in MATLAB. The proposed architecture offers the best hardware performance and good statistical properties: it passes all fifteen NIST statistical benchmark tests while operating at 79.101 MHz (1898.424 Mbps) and occupying only 0.036%, 0.23%, and 1.77% of the Zynq (XC7Z045) FPGA's slice registers, slice LUTs, and DSP blocks, respectively. Using these PRNGs, we design two 16 × 16 substitution boxes (S-boxes). The proposed S-boxes satisfy the following criteria: bijectivity, balancedness, non-linearity, dynamic distance, the strict avalanche criterion (SAC), and the BIC non-linearity criterion.
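The core idea above, integrating a 4D hyperchaotic ODE system and extracting pseudorandom bits from its trajectory, can be sketched in software. The sketch below uses the well-known hyperchaotic Lorenz system as a stand-in (the paper's Hyperchaotic-1/-2 equations are not given here) and a simple fractional-part bit extraction, which is an assumption, not the authors' method:

```python
# Sketch of a PRNG driven by a 4D hyperchaotic ODE system.
# The system below is the standard hyperchaotic Lorenz system; both it and
# the fractional-part extraction are illustrative assumptions.

def hyperchaotic_lorenz_step(state, dt=0.001):
    """One forward-Euler step of the 4D hyperchaotic Lorenz system."""
    x, y, z, w = state
    a, b, c, r = 10.0, 8.0 / 3.0, 28.0, -1.0
    dx = a * (y - x) + w
    dy = c * x - y - x * z
    dz = x * y - b * z
    dw = -y * z + r * w
    return (x + dt * dx, y + dt * dy, z + dt * dz, w + dt * dw)

def prng_bytes(n, seed=(1.0, 1.0, 1.0, 1.0), skip=100):
    """Generate n pseudorandom bytes from the chaotic trajectory."""
    state = seed
    out = []
    while len(out) < n:
        for _ in range(skip):              # decorrelate successive samples
            state = hyperchaotic_lorenz_step(state)
        frac = abs(state[0]) * 1e6 % 1.0   # keep low-significance digits
        out.append(int(frac * 256))
    return out

stream = prng_bytes(64)
```

A hardware version would replace the floating-point multiplications with the shift-friendly fixed parameters the paper describes.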
To demonstrate these PRNGs and S-boxes, three image-encryption schemes have been developed: (a) encryption using S-box-1, (b) encryption using S-box-2, and (c) two successive encryptions using S-box-1 and then S-box-2. To show that the proposed cryptosystem is highly secure, we perform a security analysis in terms of the correlation coefficient, key space, NPCR, UACI, and information entropy, and quantify encryption quality in terms of MSE, PSNR, and SSIM.
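Two of the metrics named above, NPCR and UACI, compare two cipher images (typically the cipher images of a plain image and its one-pixel-modified copy). A minimal sketch of the standard formulas for 8-bit grayscale images stored as flat lists, on toy data rather than the paper's test images:

```python
def npcr_uaci(c1, c2):
    """NPCR: percentage of pixel positions that differ.
       UACI: mean absolute intensity difference, normalized by 255."""
    assert len(c1) == len(c2)
    n = len(c1)
    npcr = 100.0 * sum(1 for a, b in zip(c1, c2) if a != b) / n
    uaci = 100.0 * sum(abs(a - b) for a, b in zip(c1, c2)) / (255.0 * n)
    return npcr, uaci

# Two toy "cipher images" differing by 1 at every pixel:
img1 = [10, 20, 30, 40]
img2 = [11, 21, 31, 41]
npcr, uaci = npcr_uaci(img1, img2)  # npcr = 100.0
```

For a good cipher, NPCR is expected near 99.6% and UACI near 33.46% on 8-bit images.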
2.
Video transmission over IEEE 802.11e wireless networks still performs poorly under large bandwidth demands and frequently changing conditions, and several enhancements of IEEE 802.11e have therefore been proposed. At the same time, large frames and the simultaneous sending of adjacent frames often cause packets to be dropped through buffer overflow. In earlier work, we proposed an IEEE 802.11e enhancement named DFAA and a content-aware mechanism to address these problems; the motivation of this paper is to find a proper way to integrate the two. We propose a DFAA enhancement (DFAA-E) that compensates for the shortcomings of the content-aware mechanism. Experimental results show that combining DFAA-E with the content-aware mechanism greatly improves decoded video quality, and that performance can be enhanced further by selecting suitable values for certain parameters.
3.
Traditional enhancement methods for low-exposure images usually consider only raising brightness and ignore the noise that is amplified in the process. Current deep-learning methods use end-to-end networks to learn a direct mapping from low-exposure images to normal images, ignoring the physical principles by which low-exposure images form and likewise leaving noise amplification unaddressed. To address these problems, this paper analyzes the essential causes of image degradation and proposes a low-exposure image-enhancement method based on a progressive dual-network model, which consists of an image-enhancement module and an image-denoising module. Each module is itself built progressively, accounting for the dark-to-bright change in luminance and for a coarse-to-fine restoration process, so that the enhanced result is closer to the real image. To train the network better, a bidirectional constraint loss is constructed that pushes the network's output toward the real data from both the forward and backward directions of the image-degradation model, reaching a dynamic balance. To validate the method, it is compared with several mainstream methods both subjectively and objectively; the experimental results show that the proposed method produces results closer to real images and achieves better performance metrics.  相似文献
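The bidirectional constraint above can be sketched in a few lines: the enhanced output should match the ground truth (forward direction), and the ground truth, pushed back through a degradation model, should match the dark input (backward direction). The gamma-curve degradation model below is an illustrative assumption, not the paper's module:

```python
# Sketch of a bidirectional constraint loss. The gamma-curve darkening
# model is a toy assumption standing in for the degradation direction.

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def degrade(img, gamma=2.2):
    """Toy darkening model: gamma curve on [0, 1] intensities."""
    return [p ** gamma for p in img]

def bidirectional_loss(enhanced, ground_truth, dark_input, lam=0.5):
    forward = l1(enhanced, ground_truth)              # enhance(dark) vs gt
    backward = l1(degrade(ground_truth), dark_input)  # degrade(gt) vs dark
    return forward + lam * backward

gt = [0.2, 0.5, 0.9]
dark = degrade(gt)                        # a perfectly consistent dark input
loss = bidirectional_loss(gt, gt, dark)   # perfect enhancement -> loss 0
```

In the actual model, `enhanced` would be a network output and the loss would be minimized by gradient descent.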
4.
Mental-health problems have become a focus of public concern and seriously threaten family harmony and social stability. Users in psychological crisis often seek help or confide on specific community forums or social media, which opens a new avenue for identifying such users. Forum posts vary in length, but the key evidence of a psychological crisis is often carried by local content. Exploiting this, we build a Multi-layer Partial Information Fusion model (MPIF) that combines hierarchical long short-term memory (LSTM) networks with convolutional neural networks (CNNs) and uses a user's forum posts to detect the severity of the user's psychological crisis. The model has three characteristics: 1) the pretrained language model BERT vectorizes the sentences in a post, capturing the different meanings a word takes in different contexts; 2) crisis-related information is mined at the word, phrase, and sentence levels: hierarchical LSTMs combined with an attention mechanism extract word-level and sentence-level local information from the post to be classified, while CNN kernels of several sizes extract phrase-level local information; 3) attention and max pooling let the model not only judge crisis severity from local information but also surface that information to psychological experts, helping them understand patients faster. On the CLPsych 2019 Shared Task, the MPIF model's official All-F1 score (the F1 values of the four suicide-risk levels a, b, c, and d, averaged) is 3.9% higher than that of the top-ranked model in the evaluation. Ablation experiments show that removing the LSTM word layer, the CNN phrase layer, or the LSTM sentence layer lowers All-F1 by 4%, 4.3%, and 2.4%, respectively.  相似文献
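The phrase-level branch described in point 2) can be sketched as 1-D convolutions of several kernel widths over a sentence's token vectors, each followed by max pooling. The toy integer "embeddings" and the fixed all-ones kernel below are assumptions for brevity; the real model learns the kernel weights:

```python
# Sketch of multi-width 1-D convolution + max pooling over token vectors,
# as used for phrase-level feature extraction. Embeddings and the all-ones
# kernel are toy stand-ins for learned parameters.

def conv1d_maxpool(tokens, width):
    """Slide a width-sized window over the token sequence, score each
    window with a fixed all-ones kernel, then max-pool over positions."""
    scores = [sum(sum(tok) for tok in tokens[i:i + width])
              for i in range(len(tokens) - width + 1)]
    return max(scores)

sentence = [[1, 2], [4, 0], [3, 3], [0, 1]]   # 4 tokens, embedding dim 2
phrase_features = [conv1d_maxpool(sentence, w) for w in (2, 3)]
# phrase_features -> [10, 13]
```

Each kernel width captures phrases of a different length; concatenating the pooled values gives the phrase-level representation.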
5.
When evaluating the reliability of component-based Web systems with highly complex structure, state-based and path-based software-reliability models suffer from high computational complexity and insufficient robustness. This paper therefore proposes CBPRM, a component-based feedforward-neural-network reliability model with low computational complexity and strong robustness. CBPRM takes the reliability of each component in the Web system as input to a feedforward neural network, dynamically optimizes the neurons according to component-reliability sensitivity, and produces the system-reliability estimate at the network's output. Theoretical analysis and experimental results show that, for structurally complex component-based Web systems, CBPRM has lower computational complexity than the compared models while preserving the accuracy of the reliability evaluation.  相似文献
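The forward pass of such a model is small: component reliabilities go in, one value (the system-reliability estimate) comes out. A minimal sketch with fixed toy weights; in CBPRM the weights would be trained and neurons pruned by reliability sensitivity, both omitted here:

```python
import math

# Sketch of a feedforward network mapping component reliabilities to a
# system-reliability estimate. Weights are toy values, not trained ones.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(components, w_hidden, w_out):
    hidden = [sigmoid(sum(w * c for w, c in zip(row, components)))
              for row in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

reliabilities = [0.99, 0.95, 0.90]     # per-component reliabilities
w_hidden = [[1.0, 1.0, 1.0], [0.5, -0.5, 0.5]]
w_out = [1.0, 1.0]
system_reliability = forward(reliabilities, w_hidden, w_out)
```

With these (positive-output) weights, raising the component reliabilities raises the estimate, which is the qualitative behavior one would expect of the trained model.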
6.
To enable the immediate and efficient dispatch of relief to victims of disaster, this study proposes a greedy-search-based, multi-objective genetic algorithm capable of regulating the distribution of available resources and automatically generating a variety of feasible emergency logistics schedules for decision-makers. The proposed algorithm dynamically adjusts distribution schedules from the various supply points according to the requirements at the demand points, in order to minimize unsatisfied demand for resources, time to delivery, and transportation costs. The algorithm was applied to the case of the Chi–Chi earthquake in Taiwan to verify its performance. Simulation results demonstrate that, whether the number of available vehicles is limited or unlimited, the proposed algorithm outperforms the MOGA and the standard greedy algorithm in time to delivery by an average of 63.57% and 46.15%, respectively, over 10,000 iterations.
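A greedy allocation step of the kind such an algorithm embeds can be sketched directly: repeatedly ship as much as possible along the cheapest remaining (supply, demand) pair. The supplies, demands, and cost matrix below are toy values, and this single heuristic stands in for the paper's full GA:

```python
# Sketch of a greedy dispatch step: serve the cheapest remaining
# (supply point, demand point) pair first. Toy data throughout.

def greedy_dispatch(supply, demand, cost):
    """supply[i], demand[j] are amounts; cost[i][j] is the travel cost.
    Returns (shipments as {(i, j): amount}, total unmet demand)."""
    supply, demand = supply[:], demand[:]
    pairs = sorted((cost[i][j], i, j)
                   for i in range(len(supply))
                   for j in range(len(demand)))
    plan = {}
    for c, i, j in pairs:
        amount = min(supply[i], demand[j])
        if amount > 0:
            plan[(i, j)] = amount
            supply[i] -= amount
            demand[j] -= amount
    return plan, sum(demand)

plan, unmet = greedy_dispatch([30, 20], [25, 25], [[1, 4], [2, 1]])
```

The GA would then mutate and recombine such schedules while scoring them on unmet demand, delivery time, and transport cost simultaneously.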
7.
The viability of networked communities depends on the creation and disclosure of user-generated content and on the frequency of user visitation (Facebook 10-K Annual Report, 2012). However, little is known about how to align the interests of users and social networking sites. In this study, we draw upon the principal–agent perspective to extend Pavlou et al.'s (2007) uncertainty-mitigation model of online exchange relationships and propose an empirically tested model for aligning the incentives of the principal (user) and the agent (service provider). As suggested by Pavlou et al., we incorporate a multi-dimensional measure of trust: trust of the provider and trust of the members. The proposed model is tested with survey data from 305 adults aged 20–55. The results support our model, delineating how real individuals with bounded rationality actually make decisions about information disclosure under uncertainty in the social networking site context. We find little to no relationship between online privacy concerns and information disclosure on social networking sites. Perceived benefits link the incentives of the principal (user) and the agent (provider), while usage intensity has the most significant impact on information disclosure. We argue that this phenomenon can be explained by Communication Privacy Management theory. The study enhances our understanding of agency theory and human-judgment theory in the context of social media; practical implications for understanding and facilitating online social exchange relationships are also discussed.
8.
In this paper, we consider interactive fuzzy programming for multi-level 0–1 programming problems involving random variable coefficients in both the objective functions and the constraints. Following the probability maximization model, together with the concept of chance constraints, the formulated stochastic multi-level 0–1 programming problems are transformed into deterministic ones. Taking into account the vagueness of the decision makers' judgments, we present an interactive fuzzy programming method in which, after the fuzzy goals of the decision makers at all levels have been determined, a satisfactory solution is derived efficiently by updating the decision makers' satisfaction levels with consideration of the overall satisfaction balance among all levels. To solve the transformed deterministic problems efficiently, we also introduce a novel tabu search method for general 0–1 programming problems. A numerical example of a three-level 0–1 programming problem illustrates the proposed method.
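The tabu search component can be illustrated on the simplest 0–1 program, a one-constraint knapsack: flip one variable at a time, keep a short-term memory of recently flipped variables, and track the best feasible solution seen. The instance below is a toy example, not the paper's general-purpose solver:

```python
# Sketch of tabu search for a 0-1 program, shown on a toy knapsack
# (maximize value subject to one capacity constraint).

def tabu_search(values, weights, capacity, iters=50, tenure=3):
    n = len(values)
    x = [0] * n                        # start from the empty solution
    best, best_val = x[:], 0
    tabu = {}                          # variable index -> iteration it frees up
    for it in range(iters):
        # evaluate all single-bit flips that are feasible and non-tabu
        candidates = []
        for i in range(n):
            if tabu.get(i, 0) > it:
                continue
            y = x[:]
            y[i] = 1 - y[i]
            if sum(w for w, b in zip(weights, y) if b) <= capacity:
                val = sum(v for v, b in zip(values, y) if b)
                candidates.append((val, i, y))
        if not candidates:
            break
        val, i, x = max(candidates)    # best admissible move, even if worse
        tabu[i] = it + tenure          # forbid flipping i back for a while
        if val > best_val:
            best, best_val = x[:], val
    return best, best_val

sol, val = tabu_search([6, 10, 12], [1, 2, 3], capacity=5)
```

The tabu list is what lets the search accept non-improving moves without cycling, which a plain greedy ascent cannot do.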
9.
Partitioning the universe of discourse and determining intervals that contain useful temporal information and offer good interpretability are critical for forecasting with fuzzy time series. In the existing literature, researchers seldom consider the effect of the time variable when partitioning the universe of discourse, so the resulting temporal intervals lack interpretability. In this paper, we take temporal information into account and partition the universe of discourse into intervals of unequal length, which improves forecasting quality. First, the time variable is incorporated into the partitioning through Gath–Geva clustering-based time-series segmentation, which yields prototypes of the data; suitable intervals are then determined from the prototypes by means of information granules. The result is an effective method for partitioning the universe and determining intervals, and we show that these intervals carry well-defined semantics. To verify the effectiveness of the approach, we apply the proposed method to forecasting student enrollments at the University of Alabama and the Taiwan Stock Exchange Capitalization Weighted Stock Index. The experimental results show that partitioning with temporal information greatly improves forecasting accuracy; furthermore, the proposed method is not sensitive to its parameters.
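The final step, turning prototypes into unequal-length intervals, can be sketched by placing interval bounds midway between adjacent prototypes. Here the prototypes are given toy values; in the paper they come from Gath–Geva time-series segmentation, and the interval refinement uses information granules rather than plain midpoints:

```python
# Sketch: derive unequal-length intervals over [lo, hi] from prototypes by
# cutting midway between adjacent prototypes (a simplifying assumption).

def intervals_from_prototypes(prototypes, lo, hi):
    """Return [(a0, b0), (a1, b1), ...] covering [lo, hi]."""
    p = sorted(prototypes)
    bounds = [lo] + [(a + b) / 2 for a, b in zip(p, p[1:])] + [hi]
    return list(zip(bounds, bounds[1:]))

# Toy enrollment-like range with three prototypes:
parts = intervals_from_prototypes([14000, 16000, 19000], 13000, 20000)
# parts -> [(13000, 15000.0), (15000.0, 17500.0), (17500.0, 20000)]
```

Because the cuts follow the prototypes, dense regions of the data get narrow intervals and sparse regions get wide ones, which is what gives the intervals their semantics.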
10.
A concept lattice is an ordered structure over concepts and is particularly effective for mining association rules. However, a concept lattice is not efficient for large databases, because the lattice grows with the number of transactions. Finding an efficient strategy for updating the lattice dynamically is therefore an important issue for real-world applications, where new transactions are constantly inserted into the database. To build an efficient storage structure for mining association rules, this study proposes a method for building the initial frequent-closed-itemset lattice from the original database and updating the lattice as new transactions are inserted, which reduces the number of rescans over the entire database during maintenance. The proposed algorithm is compared with building the lattice in batch mode to demonstrate its effectiveness.
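The nodes of such a lattice are closed itemsets, and the defining operation is the closure: the intersection of all transactions containing a given itemset. A minimal sketch on a toy four-transaction database (the paper's incremental update and lattice maintenance are not reproduced here):

```python
# Sketch of the closure operator behind a frequent-closed-itemset lattice:
# the closure of an itemset is the intersection of all transactions that
# contain it; an itemset is closed when it equals its own closure.

def closure(itemset, transactions):
    covering = [t for t in transactions if itemset <= t]
    if not covering:
        return frozenset(itemset)
    result = set(covering[0])
    for t in covering[1:]:
        result &= t
    return frozenset(result)

db = [frozenset("abc"), frozenset("abd"), frozenset("ab"), frozenset("cd")]
# 'a' always occurs together with 'b', so {a} is not closed:
closed_a = closure(frozenset("a"), db)   # -> frozenset({'a', 'b'})
```

Storing only closed itemsets loses no support information, which is why the lattice is a compact structure for association-rule mining.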